11 - Deep Learning

Good morning to this very large audience.

And I think last week you didn't completely go through the topic of unsupervised learning.

So we're going to start with that.

So again, good morning.

So last week you talked about unsupervised techniques, how to work with data where you

don't have labels.

So for example, you talked about autoencoders and their variational variant, variational autoencoders.

You talked a lot about GANs and we're kind of in that field right now.

So GANs are these concepts where you have a discriminator and a generator, and they play a minimax game.

So during training they are basically trying to outdo each other: the generator tries to fool the discriminator, and the discriminator tries to tell real samples from generated ones.
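
For reference, this minimax game is commonly written as the following value function over the discriminator D and the generator G; this is the standard formulation from the original GAN paper, not a formula taken from these slides:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$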

And GANs are sometimes not that easy to train, but they can also be applied in quite a few other settings.

So I think you stopped last time around here.

So GANs, for example, can be used for semi-supervised training.

So when you have labels for some parts of your data, but not for others.

And one way to do that is, for example, by turning a K-class problem, so a classification problem where you originally have K classes, into a K plus one class problem.

So the idea is that the true classes represent the first K, and the last one basically represents the fake inputs produced by the generator.

And the discriminator now has to decide whether a sample belongs to one of these original K classes or whether it actually belongs to the fake class, meaning it was generated by the generator.

And for all the data for which you have labels, you can do this full classification into the K plus one classes.

And for data where you don't have any labels, the discriminator just has to decide whether

it's a true sample or whether it's a generated sample.
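
To make this concrete, here is a minimal PyTorch sketch of such a discriminator loss; the class count, the function name, and the assumption that `disc` returns logits over K + 1 classes are illustrative, not the exact formulation from the lecture:

```python
import torch
import torch.nn.functional as F

K = 10          # number of real classes (illustrative assumption)
FAKE = K        # index of the extra "fake" class, so logits have K + 1 entries

def discriminator_loss(disc, x_labeled, y, x_unlabeled, x_generated):
    """Semi-supervised (K+1)-class discriminator loss (sketch)."""
    # Labeled real data: ordinary cross-entropy against the true class.
    loss_labeled = F.cross_entropy(disc(x_labeled), y)

    # Unlabeled real data: the sample should land in *some* real class,
    # i.e. the predicted probability of the fake class should be small.
    p_fake = F.softmax(disc(x_unlabeled), dim=1)[:, FAKE]
    loss_unlabeled = -torch.log(1.0 - p_fake + 1e-8).mean()

    # Generated data: should be assigned to the extra fake class.
    y_fake = torch.full((x_generated.size(0),), FAKE, dtype=torch.long)
    loss_generated = F.cross_entropy(disc(x_generated), y_fake)

    return loss_labeled + loss_unlabeled + loss_generated
```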

Now in contrast to normal GANs, where you're usually interested in what the generator produces, so these fancy images that you've seen, GANs for semi-supervised learning work a little bit differently, because here you're interested in the discriminator's performance.

So this is what you want to optimize here.

So still quite an interesting and powerful technique, I would say.

Now, learning the generator's output image just in one go can be very difficult for high-resolution images.

For low-resolution images GANs usually yield impressive results, but for high-resolution images it's a little bit more difficult.

So instead of doing everything at once, so high resolution in just one go, you could think about successively increasing the resolution in your network.

So for example, you can use a form of Laplacian pyramid, so that you iteratively increase the resolution and iteratively add detail to your output image.

So as before we have noise as input, but in addition to this noise we use an upsampled output from the previous step in our Laplacian pyramid.

This then basically acts as a conditioning variable, so this is some form of advanced conditional GAN.

Now what we want to generate is not the full higher-resolution image itself, but the difference image.

So you can see that nicely here.

In the beginning we start with a very low-resolution image that the generator then basically refines, and here we see this difference image that we add to our upsampled representation.
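
As a rough sketch of this coarse-to-fine sampling, in the spirit of the Laplacian pyramid GAN (LAPGAN) idea; the generator call signatures here are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def sample_laplacian_gan(generators, z_dim=100):
    """Coarse-to-fine sampling through a Laplacian pyramid of GANs.

    `generators[0]` is assumed to map noise to a low-resolution image;
    every later generator is assumed to take noise plus the upsampled
    previous image and to return only a residual (difference) image.
    """
    # Coarsest level: generate the initial low-resolution image from noise.
    z = torch.randn(1, z_dim)
    img = generators[0](z)

    # Finer levels: upsample, predict the difference image, and add it.
    for gen in generators[1:]:
        upsampled = F.interpolate(img, scale_factor=2.0, mode="bilinear",
                                  align_corners=False)
        z = torch.randn(1, z_dim)
        residual = gen(z, upsampled)   # conditioned on the upsampled image
        img = upsampled + residual     # add the difference image
    return img
```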


The slides of the first six minutes unfortunately could not be recorded.

